Search for: All records

Creators/Authors contains: "George, Samuel"

Note: When clicking on a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (an administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from this site's.

  1. The success of GPT with coding tasks has made it important to consider the impact of GPT and similar models on teaching programming. Students' use of GPT to solve programming problems can hinder their learning. However, they might also get significant benefits such as quality feedback on programming style, explanations of how a given piece of code works, help with debugging code, and the ability to see valuable alternatives to their code solutions. We propose a new design for interacting with GPT called Mediated GPT with the goals of (a) providing students with access to GPT but allowing instructors to programmatically modify responses to prevent hindrances to student learning and combat common GPT response concerns, (b) helping students generate and learn to create effective prompts to GPT, and (c) tracking how students use GPT to get help on programming exercises. We demonstrate a first-pass implementation of this design called NotebookGPT. (A sketch of the mediation idea appears after this list.)
  2. As interest in programming as a major grows, instructors must accommodate more students in their programming courses. One particularly challenging aspect of this growth is providing quality assistance to students during in-class and out-of-class programming exercises. Prior work proposes instructor dashboards to help instructors meet these challenges. Further, the introduction of ChatGPT represents an exciting avenue for assisting instructors with programming exercises, but this assistance needs a delivery method. We propose Assistant Dashboard Plus, a revision of an existing instructor dashboard that adds two new features: (a) identifying students in difficulty so that instructors can assist them effectively, and (b) grouping students' exercise solutions into pedagogically relevant clusters of similar implementations so that instructors can give shared code-style feedback to students within the same group. For difficulty detection, it uses a state-of-the-art algorithm for which no visualization had previously been created; for code clustering, it uses GPT. We present a first-pass implementation of this dashboard. (A sketch of the solution-grouping idea appears after this list.)
  3. As part of a 3-day workshop on training faculty members in concurrency, we developed a module for hands-on training in Java Fork-Join abstractions that had several related novel pedagogical and technical components: (1) source and runtime checks that (a) tested whether test-aware code created by the trainees met the expected requirements and (b) logged their results in the local file system and the IBM cloud; (2) editable worked-example code along with a guide on how to understand the underlying concepts behind the code and experiment with it; (3) the ability to follow the guide (a) synchronously, with graduate-student help, in a session devoted to this module, and (b) asynchronously, on one's own, before or after the synchronous session; (4) assignments trainees could do after experimenting with the worked example; and (5) a Zoom recording of the entire synchronous session. Fourteen faculty members from across the country attended the session, with varying amounts of knowledge of Java and automatic assessment. Data gathered from check logs and the Zoom recording, together with novel visualizations of them, provide information to evaluate our pedagogical model and differentiate the participants. (A Fork-Join example in the style of the worked example appears after this list.)
  4. (No abstract available for this record.)
  5. During the Covid pandemic, we gave a Java assignment that exercised threads, synchronization, and coordination, and we wrote tests to check each concurrency aspect of the assignment. We used four different technologies to record events related to work on this assignment: the Piazza discussion forum, the Zoom conferencing system, an Eclipse plugin, and a testing framework. The recorded data have given the instructors of the course broad awareness of several aspects of student work: How much time did a student spend on an assignment? How many attempts did students make on thread, synchronization, and coordination tests before reaching their final scores? How many times did they go to Piazza or use Zoom-supported office-hour visits to fix concurrency problems, and what was the nature of these problems? How effective was Zoom transcription for classifying the office-hour problems? How long and effective were the office-hour visits, and to what extent was screen sharing used during them? To what extent did students use the tests to determine whether they had met assignment requirements? These data, in turn, have provided us with preliminary answers to a variety of questions we had about unseen work and the concurrency aspects of the assignment. While the answers may be specific to our assignment, the questions answered by these mechanisms can be expected to apply in other settings. (A toy example of the three concurrency aspects appears after this list.)
  6. Existing techniques for automating the testing of sequential programming assignments are fundamentally at odds with concurrent programming because they are oblivious to the algorithm used to implement the assignment. We have developed a framework that addresses this limitation for object-based concurrent assignments whose user interface (a) is implemented using the observer pattern and (b) makes apparent whether concurrency requirements are met. It has two components. The first reduces the number of steps a human grader needs to take to interact with and score the user interfaces of the submitted programs. The second completely automates assessment by observing the events sent by the student-implemented observable objects. Both components are used to score the final submission and log interactions; the second is also used to provide feedback during assignment implementation. Our experience shows that the framework is used extensively by students, leads to more partial credit, reduces grading time, and gives statistics about incremental student progress. (A sketch of observer-based grading appears after this list.)
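
Below is a minimal sketch of the mediation idea from record 1. All names here (MediatedGpt, LlmClient, ask) are hypothetical illustrations rather than the paper's NotebookGPT API; the only assumption is that some client can exchange a prompt for a completion.

```java
// Hypothetical sketch of a GPT mediation layer: instructor-supplied filters
// rewrite each model response, and every student prompt is logged.
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class MediatedGpt {
    /** Stand-in for whatever client actually calls the model. */
    interface LlmClient { String complete(String prompt); }

    private final LlmClient client;
    private final List<UnaryOperator<String>> instructorFilters; // e.g., remove complete solutions
    private final List<String> promptLog = new ArrayList<>();

    MediatedGpt(LlmClient client, List<UnaryOperator<String>> instructorFilters) {
        this.client = client;
        this.instructorFilters = instructorFilters;
    }

    /** Sends the student's prompt, then lets each instructor filter rewrite the reply. */
    public String ask(String studentPrompt) {
        promptLog.add(studentPrompt);              // goal (c): track how students use GPT
        String response = client.complete(studentPrompt);
        for (UnaryOperator<String> filter : instructorFilters) {
            response = filter.apply(response);     // goal (a): programmatically modify responses
        }
        return response;
    }

    public List<String> log() { return List.copyOf(promptLog); }
}
```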
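
The solution-grouping feature from record 2 might look roughly like the following. The paper clusters solutions with GPT; this stand-in instead buckets submissions by a crude normalized-text key, and SolutionGrouper and its methods are invented for illustration.

```java
// Sketch of grouping student solutions so one piece of style feedback can
// cover a whole group. Real clustering (per the paper, via GPT) would be
// far more tolerant than this exact-match-after-normalization key.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SolutionGrouper {
    /** Strips line comments and whitespace so trivially different solutions collide. */
    static String normalize(String code) {
        return code.replaceAll("//.*", "").replaceAll("\\s+", "");
    }

    /** Maps each normalized form to the students whose submissions share it. */
    static Map<String, List<String>> group(Map<String, String> solutionsByStudent) {
        Map<String, List<String>> groups = new HashMap<>();
        solutionsByStudent.forEach((student, code) ->
            groups.computeIfAbsent(normalize(code), k -> new ArrayList<>()).add(student));
        return groups;
    }
}
```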
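
Record 3 teaches Java's standard Fork-Join abstractions. The self-contained RecursiveTask below shows the kind of worked example such a module might use; it is not the workshop's actual example code.

```java
// Parallel array sum with the standard java.util.concurrent Fork-Join API:
// split the range in half until it is small, then combine partial sums.
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ArraySum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;   // below this, sum sequentially
    private final long[] data;
    private final int lo, hi;

    ArraySum(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // base case: small enough to do directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ArraySum left = new ArraySum(data, lo, mid);
        left.fork();                              // run the left half asynchronously
        long rightSum = new ArraySum(data, mid, hi).compute();
        return left.join() + rightSum;            // wait for the left half, then combine
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        Arrays.fill(data, 1);
        System.out.println(ForkJoinPool.commonPool().invoke(new ArraySum(data, 0, data.length)));
    }
}
```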
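
Record 5's assignment exercised threads, synchronization, and coordination. The toy handoff below is not the assignment itself; it only shows the three aspects such tests would have to probe: a spawned thread, a lock-protected shared cell, and wait/notifyAll coordination.

```java
// Minimal producer/consumer handoff illustrating threads, synchronization
// (methods synchronized on `this`), and coordination (wait/notifyAll).
public class HandoffDemo {
    private Integer slot = null;                  // shared state, guarded by `this`

    synchronized void put(int value) throws InterruptedException {
        while (slot != null) wait();              // coordination: block until the slot is empty
        slot = value;
        notifyAll();
    }

    synchronized int take() throws InterruptedException {
        while (slot == null) wait();              // coordination: block until the slot is full
        int value = slot;
        slot = null;
        notifyAll();
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        HandoffDemo buffer = new HandoffDemo();
        Thread producer = new Thread(() -> {      // threads: a concurrent producer
            try { for (int i = 0; i < 3; i++) buffer.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        for (int i = 0; i < 3; i++) System.out.println(buffer.take());
        producer.join();
    }
}
```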
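
Record 6's second component observes events sent by student-implemented observable objects. The sketch below assumes a hypothetical observer contract (ExerciseObserver, ObservableExercise) and invented event names; the framework's real interfaces and scoring rules are not given in the abstract.

```java
// Sketch of observer-based automated assessment: the grader registers as an
// observer on a student-implemented observable, records the announced events,
// and scores the submission (with partial credit) from that event stream.
import java.util.ArrayList;
import java.util.List;

public class AutoGraderSketch {
    /** Assumed contract for student code: observables announce named events. */
    interface ExerciseObserver { void event(String name, Object detail); }
    interface ObservableExercise { void addObserver(ExerciseObserver o); void run(); }

    /** Records the event stream, then checks it against the requirements. */
    static class Grader implements ExerciseObserver {
        final List<String> seen = new ArrayList<>();
        public void event(String name, Object detail) { seen.add(name); }

        int score() {
            int points = 0;
            if (seen.contains("threadsStarted")) points++;      // invented requirement names
            if (seen.contains("bufferSynchronized")) points++;
            return points;                                      // partial credit is possible
        }
    }

    static int grade(ObservableExercise submission) {
        Grader grader = new Grader();
        submission.addObserver(grader);   // hook in before the student code runs
        submission.run();                 // student code sends events as it executes
        return grader.score();
    }
}
```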